Open software supply chain attacks, once successful, can exact heavy costs in mission-critical applications. As open-source ecosystems for deep learning flourish and become increasingly universal, they present attackers with previously unexplored avenues to inject malicious backdoor code into deep neural network models. This paper proposes Flareon, a small, stealthy, seemingly harmless code modification that specifically targets the data augmentation pipeline with motion-based triggers. Flareon neither alters ground-truth labels, nor modifies the training loss objective, nor does it assume prior knowledge of the victim model architecture, training data, or training hyperparameters. Yet, it has a surprisingly large ramification on training -- models trained under Flareon learn powerful target-conditional (or "any2any") backdoors. The resulting models can exhibit high attack success rates for any target choices and better clean accuracies than backdoor attacks that not only seize greater control, but also assume more restrictive attack capabilities. We also demonstrate the effectiveness of Flareon against recent defenses. Flareon is fully open-source and available online to the deep learning community: https://github.com/lafeat/flareon.
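The attack surface described above is the data augmentation pipeline itself. The sketch below is purely illustrative of that idea (the function names and structure are not from the Flareon code): an otherwise ordinary augmentation step occasionally applies a small "motion" (pixel shift) to the image, while labels and the loss are left untouched.

```python
import random

def shift(image, dx):
    """Translate each row of a 2D image by dx pixels (zero-filled)."""
    w = len(image[0])
    return [[row[c - dx] if 0 <= c - dx < w else 0.0 for c in range(w)]
            for row in image]

def augment(image, trigger_prob=0.5, rng=None):
    """Stand-in for a normal augmentation pipeline (identity here),
    except that with some probability it also applies a motion-based
    perturbation -- the kind of stealthy modification the paper studies."""
    rng = rng or random.Random(0)
    out = [row[:] for row in image]
    if rng.random() < trigger_prob:
        out = shift(out, dx=1)
    return out

shifted = shift([[1.0, 2.0, 3.0]], 1)  # -> [[0.0, 1.0, 2.0]]
```

Because only the augmentation code changes, such a modification is easy to overlook in a code review, which is the supply-chain concern the abstract raises.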
Recently, the success of pre-training in the text domain has been fully extended to vision, audio, and cross-modal scenarios. The proposed pre-training models of different modalities are showing a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into 5 components: embedding, encoder, target embedding, decoder, and target. As almost all common modules are provided in each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
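The modular composition idea can be sketched in miniature as follows. This is a toy illustration of assembling a model from interchangeable components, not the actual TencentPretrain API; all class names here are hypothetical.

```python
class WordEmbedding:
    """Embedding component: map token ids to fixed-size vectors."""
    def __call__(self, tokens):
        return [[float(t)] * 4 for t in tokens]

class MeanEncoder:
    """Encoder component: encode a sequence by averaging its vectors."""
    def __call__(self, vectors):
        dim = len(vectors[0])
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

class ClassificationTarget:
    """Target component: a trivial head that sums hidden features."""
    def __call__(self, hidden):
        return sum(hidden)

class ComposedModel:
    """Any embedding/encoder/target triple forms a complete model."""
    def __init__(self, embedding, encoder, target):
        self.embedding, self.encoder, self.target = embedding, encoder, target

    def __call__(self, tokens):
        return self.target(self.encoder(self.embedding(tokens)))

model = ComposedModel(WordEmbedding(), MeanEncoder(), ClassificationTarget())
score = model([1, 2, 3])  # -> 8.0
```

Swapping any single component (say, a patch embedding for vision instead of a word embedding) yields a new model without touching the rest, which is the reuse the toolkit's design aims for.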
The Transformer is an extremely powerful and prominent deep learning architecture. In this work, we challenge the commonly held belief in deep learning that going deeper is better, and show an alternative design approach: building wider attention Transformers. We demonstrate that wide single-layer Transformer models can compete with or outperform deeper ones in a variety of Natural Language Processing (NLP) tasks when both are trained from scratch. The impact of changing the model aspect ratio on Transformers is then studied systematically. This ratio balances the number of layers and the number of attention heads per layer while keeping the total number of attention heads and all other hyperparameters constant. On average, across 4 NLP tasks and 10 attention types, single-layer wide models perform 0.3% better than their deep counterparts. We show an in-depth evaluation and demonstrate how wide models require a far smaller memory footprint and can run faster on commodity hardware. In addition, these wider models are also more interpretable. For example, a single-layer Transformer on the IMDb byte-level text classification task has 3.1x faster inference latency on a CPU than its equally accurate deeper counterpart, and is half the size. We therefore put forward wider and shallower models as a viable and desirable alternative for small models on NLP tasks, and as an important area of research for domains beyond this.
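The aspect-ratio sweep described above can be made concrete with a small enumeration: holding the total number of attention heads fixed, list every way to split them between depth (layers) and width (heads per layer). The function name is illustrative, not from the paper's code.

```python
def aspect_ratio_configs(total_heads):
    """All (layers, heads_per_layer) pairs whose product equals
    total_heads, i.e. the depth/width trade-off at constant head budget."""
    return [(layers, total_heads // layers)
            for layers in range(1, total_heads + 1)
            if total_heads % layers == 0]

configs = aspect_ratio_configs(12)
# (1, 12) is the single-layer wide model; (12, 1) the deep narrow one.
```

With 12 heads in total this yields six configurations, of which the paper's claim concerns the extreme wide end, (1, 12), against the deeper settings.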
Neural networks are vulnerable to adversarial examples, which can cause models to fail. Adversarial training is one of the solutions against adversarial examples: the model is attacked during training and learns to be resilient to those attacks. However, this process is currently expensive -- it takes a long time to produce and train models with adversarial samples, and, worse, it occasionally fails. In this paper, we demonstrate data pruning methods that improve the efficiency of adversarial training through data subsampling. We empirically show that data pruning improves the convergence and reliability of adversarial training, albeit with different levels of utility degradation. For example, we observe that with random subsampling of CIFAR10 that removes 40% of the data, we lose 8% adversarial accuracy against the strongest attacker, while with only 20% of the data we lose 14% adversarial accuracy and reduce runtime by a factor of 3. Interestingly, we find that in some settings data pruning brings the benefits of both worlds -- it both improves adversarial accuracy and reduces training time.
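The random-subsampling variant of data pruning mentioned above is simple enough to sketch directly. This is a minimal, illustrative version (the function name is hypothetical): keep a fixed fraction of the training set before running (adversarial) training on the remainder.

```python
import random

def prune_dataset(dataset, keep_fraction, seed=0):
    """Return a random subset containing keep_fraction of the examples."""
    rng = random.Random(seed)
    k = int(len(dataset) * keep_fraction)
    return rng.sample(dataset, k)

data = list(range(1000))          # stand-in for a training set
pruned = prune_dataset(data, keep_fraction=0.6)  # drop 40% of the data
```

Since adversarial example generation is the expensive part of each epoch, shrinking the dataset by 40% cuts that cost roughly proportionally, which is the source of the runtime savings the abstract reports.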
Machine learning is vulnerable to adversarial manipulation. Prior literature has shown that at the training stage attackers can manipulate data and the data sampling procedure to control model behavior. A common attack goal is to plant backdoors, i.e., to force the victim model to learn to recognize a trigger known only to the adversary. In this paper, we introduce a new class of backdoor attacks that hide inside the model architecture, i.e., in the inductive bias of the functions used for training. These backdoors are simple to implement, for instance by publishing open-source code for a backdoored model architecture that others will unknowingly reuse. We demonstrate that model-architecture backdoors represent a real threat and, unlike other approaches, can survive a complete re-training from scratch. We formalize the main construction principles behind architectural backdoors, such as a link between the input and the output, and describe some possible protections against them. We evaluate our attacks on computer vision benchmarks of different scales and demonstrate that the underlying vulnerability is pervasive in a variety of training settings.
Federated learning (FL) is a powerful technique for training a model on a server with data from several clients in a privacy-preserving manner. In FL, the server sends the model to every client, which then trains the model locally and sends it back to the server. The server aggregates the updated models and repeats the process for several rounds. FL incurs significant communication costs, in particular when sending the updated local models from the clients back to the server. Recently proposed algorithms quantize the model parameters to efficiently compress FL communication. These algorithms typically have a quantization level that controls the compression factor. We find that dynamic adjustment of the quantization level can boost compression without sacrificing model quality. First, we introduce a time-adaptive quantization algorithm that increases the quantization level as training progresses. Second, we introduce a client-adaptive quantization algorithm that assigns each individual client the optimal quantization level in every round. Finally, we combine both algorithms into DAdaQuant, a doubly-adaptive quantization algorithm. Our experiments show that DAdaQuant consistently improves client→server compression, outperforming the strongest non-adaptive baselines by up to 2.8×.
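The two ingredients above, uniform quantization with a controllable level and a time-adaptive schedule for that level, can be sketched in a toy form. This is illustrative only, not the DAdaQuant implementation; the function names and the linear schedule are assumptions.

```python
def quantize(values, q):
    """Uniformly quantize values in [-1, 1] to a grid with step 1/q.
    Larger q means finer quantization and less compression."""
    return [round(v * q) / q for v in values]

def time_adaptive_level(rnd, total_rounds, q_min=2, q_max=64):
    """Interpolate the quantization level from q_min up to q_max as
    training progresses: coarse early rounds, fine late rounds."""
    frac = rnd / max(total_rounds - 1, 1)
    return int(q_min + frac * (q_max - q_min))

update = [0.13, -0.42, 0.88]                            # a model update
early = quantize(update, time_adaptive_level(0, 100))   # coarse: q = 2
late = quantize(update, time_adaptive_level(99, 100))   # fine:   q = 64
```

Early rounds tolerate coarse updates because the model is far from convergence, so the schedule spends communication budget where precision matters most; the client-adaptive variant additionally varies `q` per client within a round.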
Imitation learning (IL) is a simple and powerful way to use high-quality human driving data, which can be collected at scale, to identify driving preferences and produce human-like behavior. However, policies based on imitation learning alone often fail to sufficiently account for safety and reliability concerns. In this paper, we show how imitation learning combined with reinforcement learning using simple rewards can substantially improve the safety and reliability of driving policies over those learned from imitation alone. In particular, we use a combination of imitation and reinforcement learning to train a policy on over 100k miles of urban driving data, and measure its effectiveness in test scenarios grouped by different levels of collision risk. To our knowledge, this is the first application of a combined imitation and reinforcement learning approach in autonomous driving that utilizes large amounts of real-world human driving data.
Considerable progress has recently been made in leveraging CLIP (Contrastive Language-Image Pre-Training) models for text-guided image manipulation. However, all existing works rely on additional generative models to ensure the quality of results, because CLIP alone cannot provide enough guidance information for fine-scale pixel-level changes. In this paper, we introduce CLIPVG, a text-guided image manipulation framework using differentiable vector graphics, which is also the first CLIP-based general image manipulation framework that does not require any additional generative models. We demonstrate that CLIPVG can not only achieve state-of-the-art performance in both semantic correctness and synthesis quality, but is also flexible enough to support various applications far beyond the capability of all existing methods.
Semantic representation learning for sentences is an important and well-studied problem in NLP. The current trend for this task involves training a Transformer-based sentence encoder through a contrastive objective on text, i.e., clustering sentences with semantically similar meanings and scattering others. In this work, we find that the performance of Transformer models as sentence encoders can be improved by training with multi-modal multi-task losses, using unpaired examples from another modality (e.g., sentences and unrelated image/audio data). In particular, besides learning through the contrastive loss on text, our model simultaneously clusters examples from a non-linguistic domain (e.g., visual/audio) with a similar contrastive loss. The reliance of our framework on unpaired non-linguistic data makes it language-agnostic, enabling it to be widely applicable beyond English NLP. Experiments on 7 semantic textual similarity benchmarks show that models trained with the additional non-linguistic (image/audio) contrastive objective lead to higher-quality sentence embeddings. This indicates that Transformer models are able to generalize better by performing a similar task (i.e., clustering) on examples from different modalities in a multi-task fashion.
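The shared contrastive objective applied to both modalities can be illustrated with a minimal InfoNCE-style loss over embeddings. This sketch is generic, not the paper's implementation; the function names and the temperature value are assumptions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """-log softmax of the positive's similarity against the negatives:
    low when the anchor is close to its positive and far from negatives."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / temperature) for s in sims]
    return -math.log(exps[0] / sum(exps))

anchor = [1.0, 0.0]
loss_close = contrastive_loss(anchor, [0.9, 0.1], [[-1.0, 0.0]])
loss_far = contrastive_loss(anchor, [-1.0, 0.0], [[0.9, 0.1]])
# the loss is lower when anchor and positive are similar
```

The key point from the abstract is that the same loss form is applied to text pairs and, as a separate task, to unpaired image or audio examples; only the encoders' inputs differ, so no cross-modal pairing is required.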
For visual manipulation tasks, we aim to represent image content with semantically meaningful features. However, learning implicit representations from images often lacks interpretability, especially when attributes are intertwined. We focus on the challenging task of extracting disentangled 3D attributes only from 2D image data. Specifically, we focus on human appearance and learn implicit pose, shape, and garment representations of dressed humans from RGB images. Our method learns an embedding with disentangled latent representations of these three image properties, and enables meaningful re-assembly of features and property control through a 2D-to-3D encoder-decoder structure. The 3D model is inferred solely from the feature map in the learned embedding space. To the best of our knowledge, our method is the first to address this highly underconstrained problem with cross-domain disentanglement. We qualitatively and quantitatively demonstrate our framework's ability to transfer pose, shape, and garments in 3D reconstruction on virtual data, and show how an implicit shape loss can benefit the model's ability to recover fine-grained reconstruction details.